
# AWQ optimization

## Qwen3 14B 4bit AWQ

- **License:** Apache-2.0
- **Publisher:** mlx-community
- **Category:** Large Language Model
- **Description:** Qwen3-14B-4bit-AWQ is an MLX-format model converted from Qwen/Qwen3-14B, quantized to 4-bit precision with AWQ for efficient inference on the MLX framework.
- **Downloads:** 252 · **Likes:** 2
## Qwen3 8B 4bit AWQ

- **License:** Apache-2.0
- **Publisher:** mlx-community
- **Category:** Large Language Model
- **Description:** Qwen3-8B-4bit-AWQ is a 4-bit AWQ-quantized version of Qwen/Qwen3-8B, suitable for text generation tasks on the MLX framework.
- **Downloads:** 1,682 · **Likes:** 1
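Both models above target the MLX framework. A minimal usage sketch with the `mlx-lm` package (an assumption: this requires Apple Silicon and `pip install mlx-lm`; the model identifier matches the mlx-community listing above) might look like:

```python
# Minimal sketch: load a 4-bit AWQ-quantized MLX model and generate text.
# Assumes Apple Silicon and `pip install mlx-lm`; not runnable elsewhere.
from mlx_lm import load, generate

# Download (or reuse a cached copy of) the quantized model from the Hub.
model, tokenizer = load("mlx-community/Qwen3-14B-4bit-AWQ")

# Generate a short completion from a prompt.
text = generate(
    model,
    tokenizer,
    prompt="Explain AWQ quantization in one sentence.",
    max_tokens=128,
)
print(text)
```

The 8B variant can be used the same way by swapping in `mlx-community/Qwen3-8B-4bit-AWQ`; the 4-bit AWQ weights trade a small amount of accuracy for a roughly 4x reduction in memory footprint versus 16-bit weights.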
© 2025 AIbase